Unlike rule-based static analysis tools, Claude Code Security applies contextual reasoning (Opus 4.6) to identify complex flaws – including vulnerabilities that had remained undetected for years. Even in its current “human-in-the-loop” mode, where developers review and approve changes, the signal was strong enough to move markets.
Within a single trading day, major security vendors lost billions in market capitalization. Investors were not reacting to a feature update. They were reacting to a possibility: that AI reasoning could challenge the foundations of application security as we know it.
The end of security tools?
The implicit thesis was clear: if an AI system can “understand” code contextually and suggest patches on its own, it could replace core parts of application security. The shift from rule-based analysis to AI-driven reasoning was quickly described as a paradigm change.
In reality, the situation is more nuanced. Claude Code Security does not replace endpoint detection, identity management, or real-time threat monitoring. It operates in the area of code scanning.
Do we really believe that a generalist – no matter how powerful – can dominate a field like IT security, with its established structures and highly specialized tools?
Security has never been just a pattern recognition problem. It has always been a matter of human judgment.
Tools remain valuable
In his Nicomachean Ethics, Aristotle distinguished between episteme (theoretical knowledge), techne (practical skill), and phronesis – practical wisdom, the ability to make sound decisions in concrete situations. Security operates exactly in this space: incident experience, regulatory requirements, industry-specific attack patterns, operational resilience – all of this has been condensed over years into specialized tools. These systems are not just code; they are experience embedded in code.
In interviews with industry leaders, it becomes clear that even from a pro-AI perspective, the discussion is not about replacement, but about use. Jensen Huang, CEO of Nvidia, puts it simply: “Agents won’t replace the tools, but agents will use tools.” And he adds: “Why rewrite the browser when the browser exists and just use it?”
Tools exist for good reasons. They bundle knowledge, structure decisions, and encode implicit heuristics. Agents will increasingly use these tools – but they will not make them obsolete.
There is no doubt: AI supports security experts
In practice, AI accelerates threat modeling, supports code reviews, analyzes dependencies, and detects anomalies. In well-structured environments, a clear leverage effect appears: expertise is amplified, routine work is reduced.
The key question, however, is who takes responsibility. Risk acceptance is not a statistical function, and liability is not a feature of a model. AI can strengthen expertise – but it does not replace the instance that ultimately carries responsibility. This distinction is discussed in sessions such as May the Code Be Secure and in the full-day workshop Secure Your GitHub Pipeline at our IT Security Summit in June 2026. The focus there is not on the next tool, but on integrating AI into existing responsibility structures.
When security becomes an architectural question
This broader shift is also visible in the 2025 edition of the OWASP Top 10. Topics such as software supply chain failures, insecure design, and deficiencies in logging and alerting dominate. These are no longer isolated vulnerabilities; they are architecture and process issues.
Security emerges in architecture decisions, in external dependencies, and in the way services communicate with each other. It is not a single event, but a continuous state within the overall system.
If security touches architecture, an organizational consequence follows. Supply chain, infrastructure, and observability affect multiple disciplines. Security specialists become co-designers – working alongside architects, developers, and operations teams. DevSecOps is therefore a response to systems that cannot be secured by isolated teams.
Service Mesh – infrastructure for collaboration
This collaboration can also be anchored in infrastructure. A service mesh, for example using Istio Ambient, makes mTLS the default, enforces zero-trust principles at platform level, enables centralized policy enforcement, and creates transparent observability. Security is not simply added at the application layer; it is implemented within the system itself. In the two-day Service Mesh bootcamp led by Michael Hofmann, this shift is explored in practice.
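What "mTLS as the default, enforced at platform level" can look like in practice: the sketch below is a minimal, illustrative Istio configuration (not from the article) that requires mutual TLS for all workloads mesh-wide. Applied in the root namespace (`istio-system` in a standard installation), a `PeerAuthentication` resource in `STRICT` mode rejects any plaintext traffic between services.

```yaml
# Illustrative sketch: mesh-wide strict mTLS via Istio.
# Placed in the root namespace, this policy applies to every
# workload in the mesh -- security enforced by the platform,
# not by each application individually.
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => mesh-wide scope
spec:
  mtls:
    mode: STRICT            # reject non-mTLS (plaintext) connections
```

The design point is that no application code changes: developers keep writing plain HTTP services, while the mesh layer transparently upgrades service-to-service traffic to mutually authenticated TLS and gives operators one central place to audit the policy.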
AI agents – diligent assistants
Autonomous agents operate across system boundaries and use existing tools. Attacks increasingly unfold across systems. Governance becomes central: Who defines policies? Who is responsible for machine identities? Sessions such as Ensuring Security and Compliance in the Cloud by Governing Autonomous Agents and Threat Modeling Agentic AI Systems Across Five Threat Zones at the IT Security Summit address exactly these dynamics. Security becomes a continuous design task as systems change, scale, and interact across boundaries.
What remains constant is responsibility
Security 2026 operates in systems that are more complex, more connected, and supported by increasingly powerful tools. AI accelerates many processes – it changes the methodology and economics of software security. What remains constant is responsibility: grounded in judgment, anchored in architecture, organized in collaboration.
The IT Security Summit is not a showcase for individual technologies, but a forum to discuss exactly these interconnections.
🔍 FAQ
1. Will AI replace traditional security tools?
No. AI enhances certain capabilities—especially code scanning and vulnerability prioritization—but it does not replace endpoint protection, identity management, monitoring, or governance frameworks. AI augments tools; it doesn’t eliminate them.
2. What makes AI-driven code analysis different from traditional tools?
Traditional static analysis relies on predefined rules and patterns. AI systems apply contextual reasoning across entire codebases, identifying complex or long-standing flaws and proposing fixes. The difference lies in reasoning depth—not in replacing the broader security ecosystem.
3. If AI can suggest patches, why do we still need human review?
Because risk acceptance, compliance decisions, and liability cannot be automated. AI can recommend actions—but it cannot take responsibility for business impact, regulatory exposure, or architectural trade-offs.
4. Does AI reduce the need for DevSecOps?
No. In fact, it reinforces DevSecOps. As AI accelerates reviews and threat modeling, the integration between development, security, and operations becomes even more critical. Faster detection increases the need for coordinated ownership.